Banned At Dawn, Used By Dusk: How U.S. Military Deployed Anthropic’s Claude AI in Iran Strikes

Posted on March 02, 2026 at 07:32 PM

In a dramatic twist that highlights the fast-moving intersection of artificial intelligence and national security, U.S. forces reportedly relied on Anthropic’s Claude AI to inform key aspects of military operations against Iran just hours after President Donald Trump ordered a federal ban on the technology. The episode underscores how deeply commercial AI has become embedded in defense systems, and why policymakers, tech companies, and military planners are grappling with new ethical and strategic questions. (Cybernews)

A Strategic Coup Amid Controversy

According to multiple reports citing The Wall Street Journal, U.S. Central Command (CENTCOM), the Pentagon’s main operational headquarters for the Middle East, used Claude during “Operation Epic Fury,” a coordinated campaign of air and precision strikes on Iranian targets. Claude was tapped for tasks such as intelligence analysis, target identification, and tactical battle-scenario simulation, roles that go well beyond simple administrative assistance. (Cybernews)

What makes the development headline-worthy is the timing: just hours earlier, President Trump had publicly designated Anthropic a “supply chain risk” and ordered most U.S. government agencies to stop using Claude. The Pentagon, however, was granted a transition period, a six-month window to remove Claude from classified systems, precisely because the model is so deeply woven into military workflows. (Cybernews)

That gap between public policy pronouncements and operational realities has ignited debate: critics warn that the use of cutting-edge AI models in sensitive defense contexts raises ethical and legal questions that current regulations aren’t ready to address.

Why Claude Was Still in the Cockpit

Claude — developed by San Francisco startup Anthropic — became one of the few commercial AI models approved for use on U.S. military classified networks through partnerships with companies like Palantir and Amazon Web Services. This deep integration didn’t happen overnight: military planners had already adopted Claude for critical decision-support tasks, and transitioning to alternative AI systems — such as models from rival OpenAI — will take time as new contracts are finalized. (Breached Company)

This heavy reliance on commercial technology in defense frameworks reflects a broader trend: artificial intelligence is now woven into the logistics and planning of modern warfare, often far ahead of public policy debates and legislative oversight.

Political and Ethical Backdrop

The clash over Claude’s role isn’t just about military utility; it’s also political. The Trump administration’s ban and its labeling of Anthropic reflect broader tensions between the government and AI companies over how far models’ usage restrictions should bend for defense work, particularly in applications that could involve autonomous systems or domestic surveillance. Anthropic has publicly refused to strip its safeguards for such use cases. (Benzinga)

Meanwhile, rival AI providers like OpenAI have reportedly secured agreements with the Pentagon to deploy their models rapidly in classified military roles, underscoring how competition among AI firms intersects with geopolitics. (The Guardian)

Looking Ahead

This episode may prove to be a flashpoint in how AI models are regulated for national security purposes. As military use of AI expands, the balance between ethical safeguards, strategic utility, and political oversight will remain a central challenge for governments and AI developers alike.


Glossary

  • Anthropic – A U.S. AI company focused on developing large language models like Claude with safety-first design principles. (Wikipedia)
  • Claude AI – A generative AI model designed for reasoning, text analysis, and decision support, used by governments and businesses worldwide. (Wikipedia)
  • CENTCOM – The U.S. Central Command, responsible for military operations across the Middle East. (Cybernews)
  • Supply Chain Risk – A government designation indicating that continued use of a vendor’s technology could pose a vulnerability or threat to operational integrity. (Benzinga)
  • Tactical Simulation – The use of AI to model and predict outcomes of combat scenarios based on complex data inputs. (Cybernews)

Source: https://www.techinasia.com/news/us-military-uses-anthropics-claude-for-iran-strike-intelligence